Obtaining fast error rates in nonconvex situations
Author
Abstract
We show that under mild assumptions on the learning problem, one can obtain a fast error rate for every reasonable fixed target function even if the base class is not convex. To that end, we show that in such cases the excess loss class satisfies a Bernstein-type condition.
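For context, one standard formulation of a Bernstein-type condition is the following; the notation (hypothesis class H, loss \(\ell\), risk minimizer \(h^*\) in H) is ours for illustration and is not taken from the abstract:

\[
\mathbb{E}\, f^2 \;\le\; B \bigl(\mathbb{E}\, f\bigr)^{\beta}
\quad \text{for every } f \in \{\ell_h - \ell_{h^*} : h \in H\},
\]
for some constants \(B \ge 1\) and \(0 < \beta \le 1\). Bounding second moments by a power of the first moment in this way is exactly what localization arguments need in order to improve on the slow \(n^{-1/2}\) rate.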
Similar articles
Fast Reconstruction of SAR Images with Phase Error Using Sparse Representation
In past years, a number of algorithms have been introduced for synthetic aperture radar (SAR) imaging. However, they all suffer from the same problem: the data size to process is considerably large. In recent years, compressive sensing and sparse representation of the signal in SAR have gained significant research interest. This method offers the advantage of reducing the sampling rate, bu...
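As a rough illustration of the sparse-representation idea mentioned in this entry, here is a minimal iterative soft-thresholding (ISTA) sketch for recovering a sparse signal from linear measurements; the measurement matrix, regularization weight, and iteration count are placeholders, and this is not the SAR reconstruction method of the cited paper.

import numpy as np

def ista(A, y, lam=0.1, n_iters=200):
    # Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    # A generic sparse-recovery sketch, not the cited paper's algorithm.
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                # gradient of 0.5*||A x - y||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold
    return x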
A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fits the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorith...
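A minimal sketch of the trim-while-fitting idea described above, using alternating minimization in the style of least trimmed squares; the linear model, squared loss, and keep fraction are assumptions made for illustration, and this is not the SMART algorithm itself.

import numpy as np

def trimmed_least_squares(X, y, keep_frac=0.8, n_iters=20):
    # Alternate between (a) refitting least squares on the currently trusted
    # points and (b) re-selecting the points the model explains best.
    # A generic sketch of trimmed fitting, not the cited paper's method.
    n, d = X.shape
    k = int(keep_frac * n)                      # number of points treated as clean
    keep = np.arange(n)                         # start by trusting every point
    w = np.zeros(d)
    for _ in range(n_iters):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)   # (a) fit on kept points
        residuals = (X @ w - y) ** 2                            # squared residuals on all points
        keep = np.argsort(residuals)[:k]                        # (b) keep the k best-explained
    return w, keep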
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution
Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descen...
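To make the "vanilla procedure" setting concrete, here is a sketch of plain gradient descent for real-valued phase retrieval with the quartic loss; the spectral initialization, step-size scaling, and Gaussian-measurement assumption are ours, and this is not the authors' implementation.

import numpy as np

def phase_retrieval_gd(A, y, step=0.1, n_iters=500):
    # Vanilla gradient descent on the nonconvex quartic loss
    #   f(x) = (1/4m) * sum_i ((a_i . x)^2 - y_i)^2
    # for real-valued measurements y_i ~ (a_i . x*)^2.  A generic sketch of an
    # unregularized procedure of the kind discussed above, not the paper's code.
    m, n = A.shape
    Y = (A.T * y) @ A / m                       # (1/m) * sum_i y_i a_i a_i^T
    eigvals, eigvecs = np.linalg.eigh(Y)
    x = eigvecs[:, -1] * np.sqrt(np.mean(y))    # spectral initialization
    mu = step / max(np.mean(y), 1e-12)          # Wirtinger-flow-style step scaling
    for _ in range(n_iters):
        r = (A @ x) ** 2 - y                    # residuals of the squared measurements
        grad = (A.T @ (r * (A @ x))) / m        # gradient of the quartic loss
        x = x - mu * grad
    return x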
A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
Machine learning theory typically assumes that training data is unbiased and not adversarially generated. When real training data deviates from these assumptions, trained models make erroneous predictions, sometimes with disastrous effects. Robust losses, such as the Huber norm, were designed to mitigate the effects of such contaminated data, but they are limited to the regression context. In t...
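For reference, the Huber penalty mentioned above (commonly written as a loss on the residual \(r\), with a threshold parameter \(\delta > 0\)) is

\[
H_{\delta}(r) =
\begin{cases}
\tfrac{1}{2} r^{2}, & |r| \le \delta,\\[2pt]
\delta\bigl(|r| - \tfrac{1}{2}\delta\bigr), & |r| > \delta,
\end{cases}
\]
i.e. it is quadratic for small residuals and only linear for large ones, so gross outliers contribute far less than under the squared loss.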
On Geometric Upper Bounds for Positioning Algorithms in Wireless Sensor Networks
This paper studies the possibility of upper bounding the position error of an estimate for range-based positioning algorithms in wireless sensor networks. In this study, we argue that in certain situations, when the measured distances between sensor nodes are positively biased, e.g., in non-line-of-sight conditions, the target node is confined to a closed, bounded convex set (a feasible set) whic...
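A small sketch of the feasible-set idea described in this entry: when every measured distance over-estimates the true one, the target must lie in the intersection of the disks centered at the anchors with the measured radii. The anchor positions and range values below are hypothetical, chosen only for illustration.

import numpy as np

def in_feasible_set(point, anchors, measured_dists):
    # True iff `point` lies inside every disk of radius measured_dists[i]
    # centered at anchors[i] -- the closed, bounded convex feasible set implied
    # by positively biased (e.g. NLOS) range measurements.
    dists = np.linalg.norm(anchors - point, axis=1)
    return bool(np.all(dists <= measured_dists))

# hypothetical anchors and NLOS-inflated ranges, purely for illustration
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
measured = np.array([7.5, 8.0, 9.0])
print(in_feasible_set(np.array([4.0, 3.0]), anchors, measured))   # True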
Journal: J. Complexity
Volume: 24, Issue: -
Pages: -
Publication year: 2008